Using Bagging and Cross-Validation to Improve Ensembles Based on Penalty Terms
Authors
Abstract
Decorrelated and CELS are two ensemble methods that modify the learning procedure to increase the diversity among the networks of the ensemble. Although they perform well in previous comparisons, they are not as well known as alternatives such as Bagging and Boosting, which instead modify the learning set in order to obtain classifiers with high performance. In this paper, two different procedures are applied to Decorrelated and CELS in order to modify the learning set of each individual network and improve its accuracy. The results show that both ensembles are improved by using the two proposed methodologies as specific set generators.
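As a rough illustration of the "specific set generator" idea, the sketch below resamples the training set independently for each network of an ensemble, Bagging-style. It is a minimal, hypothetical helper, not the paper's actual procedure; the function name and interface are assumptions for illustration.

```python
import numpy as np

def bootstrap_sets(X, y, n_members, rng=None):
    """Generate one bootstrap resample of (X, y) per ensemble member.

    Hypothetical sketch of Bagging-style specific-set generation:
    each network would then be trained on its own resampled set,
    increasing diversity among the ensemble members.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    sets = []
    for _ in range(n_members):
        # Sample n indices with replacement: each set omits some
        # patterns and repeats others, so no two networks see the
        # same training data.
        idx = rng.integers(0, n, size=n)
        sets.append((X[idx], y[idx]))
    return sets

# Example: three resampled training sets for a three-network ensemble.
X = np.arange(10).reshape(5, 2)
y = np.array([0, 1, 0, 1, 0])
sets = bootstrap_sets(X, y, n_members=3, rng=0)
```

Each returned set has the same size as the original, which keeps the per-network training cost unchanged while still perturbing what each network learns.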
Similar Papers
Ensemble strategies to build neural network to facilitate decision making
There are three major strategies for forming neural network ensembles. The simplest is the Cross-Validation strategy, in which all members are trained with the same training data. Bagging and Boosting strategies produce perturbed samples from the training data. This paper provides an ideal model based on two important factors, the activation function and the number of neurons in the hidden layer, and based u...
Machine Learning Ensembles: An Empirical Study and Novel Approach
Two learning ensemble methods, Bagging and Boosting, have been applied to decision trees to improve classification accuracy over that of a single decision tree learner. We introduce Bagging and propose a variant of it, Improved Bagging, which, in general, outperforms the original bagging algorithm. We experiment on 22 datasets from the UCI repository, with emphasis on the ensemble's accuracy ...
Improving Boosting Methods by Generating Specific Training and Validation Sets
In previous research it has been seen that Bagging, Boosting, and Cross-Validation Committee can each provide good performance separately. In this paper, Boosting methods are combined with Bagging and Cross-Validation Committee in order to generate accurate ensembles and benefit from all of these alternatives. In this way, the networks are trained according to the boosting methods, but the specific t...
An Efficient Method to Estimate Bagging's Generalization Error
Bagging [1] is a technique that tries to improve a learning algorithm's performance by using bootstrap replicates of the training set [5, 4]. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive: for leave-one-out cross-validation one needs to train the underlying algorithm on the order of m times, where...
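The cheaper alternative alluded to here is the out-of-bag estimate: each bootstrap replicate leaves out roughly a third of the training patterns, and those held-out patterns can score the member trained on that replicate, with no extra training runs. A minimal sketch, assuming generic `fit`/`predict` callables for the base learner (both names are placeholders, not from the paper):

```python
import numpy as np

def oob_error(X, y, fit, predict, n_members=25, rng=None):
    """Estimate generalization error from out-of-bag samples.

    Sketch only: `fit(Xb, yb)` returns a trained model and
    `predict(model, Xq)` returns integer class labels. Each point is
    scored by majority vote over the members whose bootstrap sample
    excluded it, avoiding the ~m extra training runs that
    leave-one-out cross-validation would need.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    votes = [[] for _ in range(n)]
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)          # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)     # patterns left out of this bag
        model = fit(X[idx], y[idx])
        for i, p in zip(oob, predict(model, X[oob])):
            votes[i].append(int(p))
    # Majority vote per point over the bags where it was out-of-bag;
    # points that were never out-of-bag are skipped.
    errs = [np.bincount(v).argmax() != y[i] for i, v in enumerate(votes) if v]
    return float(np.mean(errs))
```

With a couple of dozen members, almost every pattern is out-of-bag at least once, so the estimate uses the whole training set while each vote comes from a model that never saw the pattern it is judging.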
On the Integration of Ensembles of Neural Networks: Application to Seismic Signal Classification
This paper proposes a classification scheme based on the integration of multiple ensembles of ANNs. It is demonstrated on a classification problem in which seismic recordings of natural earthquakes must be distinguished from the recordings of artificial explosions. A redundant classification environment consisting of several ensembles of neural networks is created and trained on bootstrap sample set...